    Multi-Level Representation of Gesture as Command for Human Computer Interaction

    The paper addresses the multiple forms of representation that human gesture takes at different levels of human-computer interaction, ranging from gesture acquisition to mathematical models for analysis, patterns for recognition, records for databases, and, ultimately, end-level application event triggers. A mathematical model for gesture as command is presented. We also identify and provide particular models for four different types of gestures by considering both posture information and underlying motion trajectories. The problem of constructing gesture dictionaries is further addressed by taking into account similarity measures and dictionary discriminative features.
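
    The dictionary-construction problem mentioned above rests on a similarity measure between gesture trajectories. As a rough, hypothetical illustration (not the paper's actual model), the sketch below computes a simple translation- and scale-invariant dissimilarity between two 2D trajectories after arc-length resampling; all names and parameter choices are ours.

```python
# Hypothetical sketch of a trajectory dissimilarity measure of the kind a
# gesture dictionary could use to check that its entries are discriminable.
import numpy as np

def resample(points: np.ndarray, n: int = 64) -> np.ndarray:
    """Resample a polyline of shape (k, 2) to n points evenly spaced along its arc length."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])
    targets = np.linspace(0.0, cum[-1], n)
    out = np.empty((n, 2))
    out[:, 0] = np.interp(targets, cum, points[:, 0])
    out[:, 1] = np.interp(targets, cum, points[:, 1])
    return out

def normalize(points: np.ndarray) -> np.ndarray:
    """Translate to the centroid and scale by the bounding-box extent."""
    p = points - points.mean(axis=0)
    extent = np.ptp(p, axis=0).max()
    return p / extent if extent > 0 else p

def dissimilarity(a: np.ndarray, b: np.ndarray) -> float:
    """Mean point-to-point distance between two preprocessed trajectories."""
    a, b = normalize(resample(a)), normalize(resample(b))
    return float(np.linalg.norm(a - b, axis=1).mean())
```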

    Gesture-based interfaces for INTEROB: interacting with information and robotics systems

    We discuss in this paper several implementations of computer vision applications developed over the last two years in our laboratory, for which gesture-based interactions were introduced. The aim is to provide enhanced human-computer interfaces for several commonly encountered application scenarios: manipulating virtual objects and working inside virtual environments, playing computer games, and interacting with robotic systems. We particularly focused on table-based systems, which allow natural and intuitive interactions as they transform tables into comfortable and familiar interfaces.

    Brave New GES World: A Systematic Literature Review of Gestures and Referents in Gesture Elicitation Studies

    How to determine highly effective and intuitive gesture sets for interactive systems tailored to end users’ preferences? A substantial body of knowledge is available on this topic, among which gesture elicitation studies stand out distinctively. In these studies, end users are invited to propose gestures for specific referents, which are the functions to control for an interactive system. The vast majority of gesture elicitation studies conclude with a consensus gesture set identified following a process of consensus or agreement analysis. However, the information about specific gesture sets determined for specific applications is scattered across a wide landscape of disconnected scientific publications, which poses challenges to researchers and practitioners who wish to effectively harness this body of knowledge. To address this challenge, we conducted a systematic literature review and examined a corpus of N=267 studies encompassing a total of 187,265 gestures elicited from 6,659 participants for 4,106 referents. To understand similarities in users’ gesture preferences within this extensive dataset, we analyzed a sample of 2,304 gestures extracted from the studies identified in our literature review. Our approach consisted of (i) identifying the context of use represented by end users, devices, platforms, and gesture sensing technology, (ii) categorizing the referents, (iii) classifying the gestures elicited for those referents, and (iv) cataloging the gestures based on their representation and implementation modalities. Drawing from the findings of this review, we propose guidelines for conducting future end-user gesture elicitation studies.
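
    The consensus/agreement analysis referred to above is commonly quantified with the agreement rate AR(r) = Σ_Pi |Pi|(|Pi|−1) / (|P|(|P|−1)), where P is the set of gesture proposals collected for referent r and the Pi are its groups of identical proposals (Vatavu and Wobbrock, CHI 2015). A minimal sketch, with made-up proposal labels:

```python
# Agreement rate AR(r) for one referent; the proposal labels below are invented.
from collections import Counter

def agreement_rate(proposals: list[str]) -> float:
    """AR(r) = sum over identical-proposal groups Pi of |Pi|(|Pi|-1) / (|P|(|P|-1))."""
    n = len(proposals)
    if n < 2:
        return 0.0  # AR is undefined for fewer than two proposals
    groups = Counter(proposals)
    return sum(k * (k - 1) for k in groups.values()) / (n * (n - 1))

# Example: 20 participants propose gestures for the referent "volume up".
proposals = ["swipe-up"] * 12 + ["pinch-out"] * 5 + ["tap"] * 3
print(f"AR = {agreement_rate(proposals):.3f}")  # (132 + 20 + 6) / 380 ≈ 0.416
```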

    A Newcomer's Guide to EICS, the Engineering Interactive Computing Systems Community

    Welcome to EICS, the Engineering Interactive Computing Systems community, the PACMHCI/EICS journal, and the annual conference! In this short article, we introduce newcomers to the field and to our community with an overview of what EICS is and how it positions itself with respect to other venues in Human-Computer Interaction, such as CHI, UIST, and IUI, highlighting its legacy and paying homage to the past scientific events from which EICS emerged. We also take this opportunity to enumerate and exemplify scientific contributions to the field of Engineering Interactive Computing Systems, which we hope will guide researchers and practitioners towards making their future PACMHCI/EICS submissions successful and impactful in the EICS community.

    Acquisition temps réel de la gestuelle humaine pour l'interaction en réalité virtuelle (Real-Time Acquisition of Human Gesture for Interaction in Virtual Reality)

    We address in this thesis the problem of gesture recognition, with specific focus on providing a flexible model for movement trajectories as well as on estimating the variation in execution that is inherently present when performing gestures. Gestures are captured in a computer vision scenario that approximates the specifics of interactive surfaces. We propose a flexible model for gesture commands based on a spline representation, which is enhanced with elastic properties in a direct analogy with the theory of elasticity from classical physics. The model is further used for achieving gesture recognition in the context of supervised learning. In order to address the problem of variability in execution, we propose a model that measures objectively and quantitatively the local tendencies that users introduce in their executions. We make use of this model to address a problem that is considered hard by the community: automatic segmentation of continuous motion trajectories and scale-invariant identification of gesture commands. We also show the usefulness of our model for performing ergonomic analysis on gesture dictionaries.
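
    To make the trajectory representation concrete: a minimal sketch, assuming SciPy's parametric smoothing splines stand in for the thesis's spline model (the elastic-energy analogy and the recognizer built on top of it are not reproduced here).

```python
# Fit a parametric smoothing spline to a noisy 2D gesture trajectory and
# evaluate it, together with its first derivative, on a dense grid.
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical noisy gesture trajectory (a circular stroke).
t = np.linspace(0.0, 2.0 * np.pi, 50)
x = np.cos(t) + np.random.normal(0.0, 0.02, t.size)
y = np.sin(t) + np.random.normal(0.0, 0.02, t.size)

# s trades closeness of fit against smoothness of the spline.
tck, u = splprep([x, y], s=0.05)

# Dense evaluation; derivatives of this kind can feed curvature-based
# features for recognition or ergonomic analysis.
u_dense = np.linspace(0.0, 1.0, 200)
xs, ys = splev(u_dense, tck)
dxs, dys = splev(u_dense, tck, der=1)
```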

    Extensible, Extendable, Expandable, Extractable: The 4E Design Approach for Reconfigurable Displays

    We introduce “4E,” a new design approach for reconfigurable displays that can change their form factors by capitalizing on four quality properties inspired by applied materials: extensibility, extendability, expandability, and extractability. This approach is applicable to both fixed and portable displays. We define and exemplify each property, highlighting the key differences in how reconfigurable displays can change their form factors to accommodate more screen real estate for more users, applications, and functionality. To demonstrate the 4E approach, we conduct a targeted literature review and introduce E3Screen, a prototype that enhances any flat screen (e.g., of a tablet, laptop, or monitor) with two slidable, rotatable, and foldable lateral displays. We report results from a controlled experiment with N=103 participants, conducted to collect, analyze, and understand end users’ preferences for the display configurations permitted by the extendability, expandability, and extractability of E3Screen. Our results structure future research and development in reconfigurable displays and multi-display collaborative workspaces.